22 research outputs found

    Criteria for the use of omics-based predictors in clinical trials.

    The US National Cancer Institute (NCI), in collaboration with scientists representing multiple areas of expertise relevant to 'omics'-based test development, has developed a checklist of criteria that can be used to determine the readiness of omics-based tests for guiding patient care in clinical trials. The checklist criteria cover issues relating to specimens, assays, mathematical modelling, clinical trial design, and ethical, legal and regulatory aspects. Funding bodies and journals are encouraged to consider the checklist, which they may find useful for assessing study quality and evidence strength. The checklist will be used to evaluate proposals for NCI-sponsored clinical trials in which omics tests will be used to guide therapy.

    Criteria for the use of omics-based predictors in clinical trials: explanation and elaboration

    High-throughput ‘omics’ technologies that generate molecular profiles for biospecimens have been extensively used in preclinical studies to reveal molecular subtypes and elucidate the biological mechanisms of disease, and in retrospective studies on clinical specimens to develop mathematical models to predict clinical endpoints. Nevertheless, the translation of these technologies into clinical tests that are useful for guiding management decisions for patients has been relatively slow. It can be difficult to determine when the body of evidence for an omics-based test is sufficiently comprehensive and reliable to support claims that it is ready for clinical use, or even that it is ready for definitive evaluation in a clinical trial in which it may be used to direct patient therapy. Reasons for this difficulty include the exploratory and retrospective nature of many of these studies, the complexity of these assays and their application to clinical specimens, and the many potential pitfalls inherent in the development of mathematical predictor models from the very high-dimensional data generated by these omics technologies. Here we present a checklist of criteria to consider when evaluating the body of evidence supporting the clinical use of a predictor to guide patient therapy. Included are issues pertaining to specimen and assay requirements, the soundness of the process for developing predictor models, expectations regarding clinical study design and conduct, and attention to regulatory, ethical, and legal issues. The proposed checklist should serve as a useful guide to investigators preparing proposals for studies involving the use of omics-based tests. The US National Cancer Institute plans to refer to these guidelines for review of proposals for studies involving omics tests, and it is hoped that other sponsors will adopt the checklist as well.
    http://deepblue.lib.umich.edu/bitstream/2027.42/134536/1/12916_2013_Article_1104.pd

    A Simulation Based Evaluation of Sample Size Methods for Biomarker Studies

    Cancer researchers are often interested in identifying biomarkers that are indicative of poor outcomes (prognostic biomarkers) or response to specific therapies (predictive biomarkers). In designing a biomarker study, the first statistical issue encountered is the sample size requirement for adequate detection of a biomarker effect. In biomarker studies, the desired effect size is typically larger than those targeted in therapeutic trials, and the biomarker prevalence is rarely near the optimal 50%. In this article, we review sample size formulas that are routinely used in designing therapeutic trials. We then conduct simulation studies to evaluate the performance of these methods when applied to biomarker studies. In particular, we examine the impact that deviations from certain statistical assumptions (i.e., biomarker-positive prevalence and effect size) have on statistical power and type I error. Our simulation results indicate that when the true biomarker prevalence is close to 50%, all methods perform well in terms of power regardless of the magnitude of the targeted biomarker effect. However, when the biomarker-positive prevalence deviates from 50%, the empirical power based on some existing methods may be substantially different from the nominal power, and this discrepancy becomes more profound for large biomarker effects. The type I error is maintained close to the 5% nominal level in all scenarios we investigate, although there is a slight inflation as the targeted effect size increases. Based on these results, we delineate the range of parameters within which the use of some sample size methods may be sufficiently robust.
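    The kind of simulation the abstract describes can be illustrated with a minimal sketch: estimate empirical power and type I error by repeatedly drawing a cohort in which each patient is biomarker-positive with a given prevalence, then testing the group difference. This is not the authors' simulation code; the function name `simulate_power`, the continuous outcome model, and the Welch-type z-test are illustrative assumptions chosen to keep the example self-contained.

    ```python
    import math
    import random

    def simulate_power(n, prev, effect, reps=2000, seed=1):
        """Empirical rejection rate of a two-sided Welch-type z-test.

        n      -- total sample size
        prev   -- biomarker-positive prevalence
        effect -- standardized mean difference between positive and negative groups
                  (effect = 0 gives the empirical type I error)
        """
        rng = random.Random(seed)
        z_crit = 1.96  # two-sided 5% critical value
        hits = 0
        for _ in range(reps):
            pos, neg = [], []
            for _ in range(n):
                if rng.random() < prev:
                    pos.append(rng.gauss(effect, 1.0))
                else:
                    neg.append(rng.gauss(0.0, 1.0))
            if len(pos) < 2 or len(neg) < 2:
                continue  # degenerate split; count as a non-rejection
            m1, m2 = sum(pos) / len(pos), sum(neg) / len(neg)
            v1 = sum((x - m1) ** 2 for x in pos) / (len(pos) - 1)
            v2 = sum((x - m2) ** 2 for x in neg) / (len(neg) - 1)
            z = (m1 - m2) / math.sqrt(v1 / len(pos) + v2 / len(neg))
            if abs(z) > z_crit:
                hits += 1
        return hits / reps

    # Balanced vs. unbalanced prevalence at the same n and effect size:
    # power drops when prevalence moves away from 50%.
    p_balanced = simulate_power(n=100, prev=0.5, effect=0.8)
    p_skewed = simulate_power(n=100, prev=0.1, effect=0.8)
    t1_error = simulate_power(n=100, prev=0.5, effect=0.0)
    print(p_balanced, p_skewed, t1_error)
    ```

    Running the comparison shows the pattern the abstract reports: with prevalence near 50% the test is well powered, while a 10% prevalence at the same total sample size loses substantial power, and the empirical type I error stays near the 5% nominal level.
    
    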

    Reply to B. Zhao et al
